Hi, this is Canyu Chen (陈灿宇), a Computer Science Ph.D. student at Illinois Institute of Technology (IIT) since Fall 2021. Before joining IIT, I received my B.S. in Computer Science from the University of Chinese Academy of Sciences (UCAS) in 2020. I am actively looking for a new Ph.D. position starting in Fall 2025 because my previous advisor has moved to another university. Please feel free to contact me if you are interested in my research. (email: cchen151 AT hawk.iit.edu; WeChat ID: alexccychen)
I focus on Truthful, Safe, and Responsible Large Language Models, with applications in Social Computing and Healthcare. I started and lead the LLMs Meet Misinformation initiative, which aims to combat misinformation in the age of LLMs. I co-lead the Editing LLMs initiative, which aims to explore and understand knowledge editing in LLMs. I am also an organizer of the OracleLLM community, dedicated to exploring and advancing the concept of LLMs-as-Oracles. In the long run, I aim to pursue Safe and Aligned Artificial General Intelligence. I am always happy to chat, discuss potential collaborations, or give talks about my research at related seminars.
News
- [11/2024] New preprint is online: From Generation to Judgment: Opportunities and Challenges of LLM-as-a-judge, more details: [project website] [Paper List on GitHub]
- [11/2024] Guest lecture titled "Combating Misinformation in the Age of LLMs" for the course "CS 585 Natural Language Processing" instructed by Prof. Jacek Dzikowski at Illinois Institute of Technology. [Slides]
- [11/2024] Invited by Prof. Xiaotian (Max) Han to give a guest lecture "Combating Misinformation in the Age of LLMs" for the course "CSDS 600 Introduction to Artificial Intelligence" at Case Western Reserve University. [Slides]
- [11/2024] Deeply honored to be recognized as an Outstanding Reviewer at EMNLP 2024.
- [11/2024] New preprint is online: ClinicalBench: Can LLMs Beat Traditional ML Models in Clinical Prediction?, more details: [project website] [Code and results on GitHub]
- [11/2024] Invited by Prof. Zhen Xiang to give a guest lecture "Combating Misinformation in the Age of LLMs" for the course "CSCI 8000 Advanced Special Topics in Computer Science" at the University of Georgia. [Slides]
- [11/2024] Invited to give a talk "Can Large Language Model Agents Simulate Human Trust Behavior?" at Prof. James Evans's Knowledge Lab at the University of Chicago. [Slides]
- [10/2024] Invited by Prof. Tianhang Zheng to give a talk titled "Combating Misinformation in the Age of LLMs" at Zhejiang University. [Slides]
- [10/2024] New preprint is online: Can Knowledge Editing Really Correct Hallucinations?, more details: [project website] [Code and results on GitHub]
- [10/2024] Invited by Prof. Ninghao Liu to give a guest lecture "Combating Misinformation in the Age of LLMs" for the course CSCI 8265: Trustworthy Machine Learning at the University of Georgia. [Slides]
- [10/2024] Two new preprints are online: FMBench: Benchmarking Fairness in Multimodal Large Language Models on Medical Tasks and FairMindSim: Alignment of Behavior, Emotion, and Belief in Humans and LLM Agents Amid Ethical Dilemmas
- [09/2024] Our paper Can Large Language Model Agents Simulate Human Trust Behavior? is accepted to NeurIPS 2024, more details: [project website] [Code and results on GitHub]
- [09/2024] Our paper Can Large Language Models Identify Authorship? is accepted to EMNLP 2024 Findings, more details: [project website] [Code on GitHub]
- [09/2024] New survey paper Authorship Attribution in the Era of LLMs: Problems, Methodologies, and Challenges is accepted to SIGKDD Explorations 2024, more details: [project website] [Paper list on GitHub]
- [08/2024] I will give a Research Spotlight oral presentation titled "Combating Misinformation in the Age of LLMs" at The 2024 Summit on Responsible Decentralized Intelligence: Future of Decentralization and AI, hosted by The Berkeley Center for Responsible, Decentralized Intelligence (Berkeley RDI). [Slides] [YouTube]
- [07/2024] New preprint is online: Can Editing LLMs Inject Harm?, more details: [project website] [Code, Results, Dataset on GitHub]
- [07/2024] New preprint is online: MJ-Bench: Is Your Multimodal Reward Model Really a Good Judge for Text-to-Image Generation?, more details: [project website] [Code on GitHub] [Models, Datasets, and Leaderboard on HuggingFace]
- [06/2024] Our paper Evaluating the Social Impact of Generative AI Systems in Systems and Society is forthcoming in the Oxford Handbook on the Foundations and Regulation of Generative AI (Oxford University Press).
- [06/2024] Invited by Prof. Yuxuan Liang to give a talk at Swarma Club: "Can Large Language Model Agents Simulate Human Trust Behaviors?". [Slides]
- [05/2024] Invited to give a talk at the KAIST/IBS Data Science Group: "Combating Misinformation in the Age of Large Language Models (LLMs)". [Slides]
- [04/2024] Our survey paper Combating Misinformation in the Age of LLMs: Opportunities and Challenges is accepted to AI Magazine 2024. [publication] [paper list]
- [03/2024] Deeply honored and humbled to receive the prestigious 🏆 Sigma Xi Student Research Award 2024 from Illinois Tech and the local Sigma Xi chapter. Thanks to Illinois Tech Today for the coverage.
Older News
- [02/2024] New preprint is online: Can Large Language Model Agents Simulate Human Trust Behaviors?, more details: [project website]. Code and results have been released for verification: [code and results]. Demos on HuggingFace: [Trust Game Demo] [Repeated Trust Game Demo].
- [01/2024] Can LLM-Generated Misinformation Be Detected? is accepted to ICLR 2024, [project website] [dataset and code].
- [12/2023] Honored to receive the 🏆 Didactic Paper Award (1/35 of all accepted papers) in the workshop ICBINB@NeurIPS 2023 for Can LLM-Generated Misinformation Be Detected?
- [10/2023] Started the initiative LLMs Meet Misinformation along with a new survey paper Combating Misinformation in the Age of LLMs: Opportunities and Challenges, [project website] and a paper list collecting related papers and resources [paper list].
- [10/2023] Honored to be covered by Illinois Tech News for our research on Trustworthy AI: [IIT News].
- [09/2023] New preprint is online: Can LLM-Generated Misinformation Be Detected?, [project website]. The dataset and code are released: [dataset and code].
- [06/2023] Will attend FAccT 2023 as a volunteer. Welcome to Chicago, and glad to connect!
- [05/2023] One paper accepted at EACL 2023; I will attend online. Welcome to our poster!
- [04/2023] Glad to be invited by Prof. Lu Cheng to give a talk on AI Fairness at UIC. [Slides]
- [11/2022] Attending NeurIPS 2022 in person. See you in New Orleans!
- [08/2022] Attending KDD 2022 in person. Glad to meet old friends and make new ones!
Publications
2024
- ClinicalBench: Can LLMs Beat Traditional ML Models in Clinical Prediction?
Canyu Chen*, Jian Yu*, Shan Chen, Che Liu, Zhongwei Wan, Danielle S. Bitterman, Fei Wang, Kai Shu (*equal contributions)
Presented in AMIA NLP Working Group Pre-Symposium 2024 (Oral) and workshop GenAI4Health@NeurIPS 2024.
[arXiv] [project website] [Code and results on GitHub]
- Can Knowledge Editing Really Correct Hallucinations?
Baixiang Huang*, Canyu Chen*, Xiongxiao Xu, Ali Payani, Kai Shu (*equal contributions)
Presented in workshop Safe Generative AI@NeurIPS 2024.
[arXiv] [project website] [Code and results on GitHub]
Invited Talks: [Guest Lecture for CSCI 8265: Trustworthy Machine Learning@UGA] [Zhejiang University] [Guest Lecture for CSCI 8000 Advanced Special Topics in Computer Science@UGA]
🏆 Award: Top #2 Paper of the day at HuggingFace AK Daily Papers (Oct. 25, 2024).
- Can Large Language Model Agents Simulate Human Trust Behavior?
Chengxing Xie*, Canyu Chen*, Feiran Jia, Ziyu Ye, Shiyang Lai, Kai Shu, Jindong Gu, Adel Bibi, Ziniu Hu, David Jurgens, James Evans, Philip Torr, Bernard Ghanem, Guohao Li. (*equal contributions)
Published in Proceedings of the 38th Conference on Neural Information Processing Systems (NeurIPS 2024)
Also presented in workshops AGI@ICLR 2024 and NLP+CSS@NAACL 2024, the Seventeenth Midwest Speech and Language Days Symposium (MSLD 2024, Oral), and The First Workshop on AI Behavioral Science (AIBS@KDD 2024, Oral).
[arXiv] [project website] [slides] [code and results]
Demos on HuggingFace: [Trust Game Demo] [Repeated Trust Game Demo]
Invited Talks: [Swarma Club] [Knowledge Lab at the University of Chicago]
- Can Editing LLMs Inject Harm?
Canyu Chen*, Baixiang Huang*, Zekun Li, Zhaorun Chen, Shiyang Lai, Xiongxiao Xu, Jia-Chen Gu, Jindong Gu, Huaxiu Yao, Chaowei Xiao, Xifeng Yan, William Yang Wang, Philip Torr, Dawn Song, Kai Shu (*equal contributions)
Presented in workshops TiFA@ICML 2024 (Lightning Talk) and NextGenAISafety@ICML 2024.
[arXiv] [project website] [poster] [Code, Results, and Dataset] [YouTube]
🏆 Award: Research Spotlight at The 2024 Summit on Responsible Decentralized Intelligence: Future of Decentralization and AI, hosted by The Berkeley Center for Responsible, Decentralized Intelligence (Berkeley RDI)
Invited Talks: [Berkeley Decentralization & AI Summit Research Spotlight Talk] [Guest Lecture for CSCI 8265: Trustworthy Machine Learning@UGA] [Zhejiang University] [Guest Lecture for CSCI 8000 Advanced Special Topics in Computer Science@UGA]
Included in Tutorial: [Knowledge Editing for Large Language Models@IJCAI 2024]
- Combating Misinformation in the Age of LLMs: Opportunities and Challenges
Canyu Chen, Kai Shu.
Published in AI Magazine 2024 (Volume 45, Issue 3, Fall 2024), Highlight Article.
[publication] [arXiv] [project website] [Slides] [paper list] [YouTube]
Media Coverage: [Marktechpost AI Research News] [Reddit r/machinelearningnews] [Analytics Vidhya Blog].
Invited Talks: [Berkeley Decentralization & AI Summit Research Spotlight Talk] [KAIST/IBS Data Science Group] [Psych Methods] [Guest Lecture for CSCI 8265: Trustworthy Machine Learning@UGA] [Zhejiang University] [Guest Lecture for CSCI 8000 Advanced Special Topics in Computer Science@UGA].
- Can LLM-Generated Misinformation Be Detected?
Canyu Chen, Kai Shu.
Published in Proceedings of the Twelfth International Conference on Learning Representations (ICLR 2024)
Also presented in workshops RegML@NeurIPS 2023 (Oral) and ICBINB@NeurIPS 2023 (Spotlight), and the symposium AGI Leap Summit 2024.
[publication] [arXiv] [project website] [dataset and code] [Slides] [YouTube] [zhihu] [twitter/x.com] [LinkedIn]
🏆 Award: Didactic Paper Award in the workshop ICBINB@NeurIPS 2023 (1/35 of all accepted papers).
🏆 Award: Research Spotlight at The 2024 Summit on Responsible Decentralized Intelligence: Future of Decentralization and AI, hosted by The Berkeley Center for Responsible, Decentralized Intelligence (Berkeley RDI)
🏆 Award: Spotlight Research in AGI Leap Summit 2024.
🏆 Award: Third Place Award in the Illinois Tech College of Computing Poster Session 2024 (Ph.D. Group).
Included in the curriculum at: [The City University of New York].
Included in Tutorials: [Defending Against Generative AI Threats in NLP@SBP-BRiMS 2024] [Preventing and Detecting Misinformation Generated by Large Language Models@SIGIR 2024].
Media Coverage: [The Register] [LLM Security] [Blog 1] [Blog 2].
Invited Talks: [Berkeley Decentralization & AI Summit Research Spotlight Talk] [AGI Leap Summit Spotlight Research Talk] [Tsinghua AI Time] [Psych Methods] [KAIST/IBS Data Science Group] [Guest Lecture for CSCI 8265: Trustworthy Machine Learning@UGA] [Zhejiang University] [Guest Lecture for CSCI 8000 Advanced Special Topics in Computer Science@UGA].
- Authorship Attribution in the Era of LLMs: Problems, Methodologies, and Challenges
Baixiang Huang, Canyu Chen, Kai Shu.
Published in SIGKDD Explorations 2024
[arXiv] [project website] [Paper list on GitHub]
- Can Large Language Models Identify Authorship?
Baixiang Huang, Canyu Chen, Kai Shu.
Published in Findings of the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP 2024, Findings Long Paper)
[arXiv] [project website] [code]
- MetaGAD: Learning to Meta Transfer for Few-shot Graph Anomaly Detection.
Xiongxiao Xu, Kaize Ding, Canyu Chen, Kai Shu.
Published in Proceedings of the 11th IEEE International Conference on Data Science and Advanced Analytics (DSAA 2024)
[arXiv]
- Model Attribution in Machine-Generated Disinformation: A Domain Generalization Approach with Supervised Contrastive Learning
Alimohammad Beigi, Zhen Tan, Nivedh Mudiam, Canyu Chen, Kai Shu, Huan Liu.
Published in Proceedings of the 11th IEEE International Conference on Data Science and Advanced Analytics (DSAA 2024)
[arXiv]
- Evaluating the Social Impact of Generative AI Systems in Systems and Society
Irene Solaiman, Zeerak Talat, William Agnew, Lama Ahmad, Dylan Baker, Su Lin Blodgett, Canyu Chen, Hal Daumé III, Jesse Dodge, Isabella Duan, Ellie Evans, Felix Friedrich, Avijit Ghosh, Usman Gohar, Sara Hooker, Yacine Jernite, Ria Kalluri, Alberto Lusoli, Alina Leidinger, Michelle Lin, Xiuzhu Lin, Sasha Luccioni, Jennifer Mickel, Margaret Mitchell, Jessica Newman, Anaelia Ovalle, Marie-Therese Png, Shubham Singh, Andrew Strait, Lukas Struppek, Arjun Subramonian
Forthcoming in the Oxford Handbook on the Foundations and Regulation of Generative AI (Oxford University Press). Jun. 2024.
[arXiv]
- MJ-Bench: Is Your Multimodal Reward Model Really a Good Judge for Text-to-Image Generation?
Zhaorun Chen, Yichao Du, Zichen Wen, Yiyang Zhou, Chenhang Cui, Zhenzhen Weng, Haoqin Tu, Chaoqi Wang, Zhengwei Tong, Qinglan Huang, Canyu Chen, Qinghao Ye, Zhihong Zhu, Yuqing Zhang, Jiawei Zhou, Zhuokai Zhao, Rafael Rafailov, Chelsea Finn, Huaxiu Yao
Presented in workshop FM-Wild@ICML 2024.
[arXiv] [project website] [Code] [Models, Datasets, and Leaderboard on HuggingFace]
🏆 Award: Top #1 Paper of the day at HuggingFace AK Daily Papers (Jul. 9, 2024).
- From Generation to Judgment: Opportunities and Challenges of LLM-as-a-judge
Dawei Li, Bohan Jiang, Liangjie Huang, Alimohammad Beigi, Chengshuai Zhao, Zhen Tan, Amrita Bhattacharjee, Yuxuan Jiang, Canyu Chen, Tianhao Wu, Kai Shu, Lu Cheng, Huan Liu
arXiv preprint. Nov. 2024.
[arXiv] [project website] [Paper List on GitHub]
- FMBench: Benchmarking Fairness in Multimodal Large Language Models on Medical Tasks
Peiran Wu, Che Liu, Canyu Chen, Jun Li, Cosmin I. Bercea, Rossella Arcucci
arXiv preprint. Oct. 2024.
[arXiv]
- FairMindSim: Alignment of Behavior, Emotion, and Belief in Humans and LLM Agents Amid Ethical Dilemmas
Yu Lei, Hao Liu, Chengxing Xie, Songjia Liu, Zhiyu Yin, Canyu Chen, Guohao Li, Philip Torr, Zhen Wu
arXiv preprint. Oct. 2024.
[arXiv] [code]
- SST: Multi-Scale Hybrid Mamba-Transformer Experts for Long-Short Range Time Series Forecasting
Xiongxiao Xu, Canyu Chen, Yueqing Liang, Baixiang Huang, Guangji Bai, Liang Zhao, Kai Shu.
arXiv preprint. Aug. 2024.
[arXiv]
- Introducing v0.5 of the AI Safety Benchmark from MLCommons
MLCommons AI Safety Working Group
arXiv preprint. Apr. 2024.
[arXiv] [official blog]
Media Coverage: [IEEE Spectrum] [AK Daily Papers] [Marktechpost] [AI Business] [EnterpriseAI News] [HPCwire] [Hackster.io] [ELBLOG.PL] [SiliconANGLE] [GoatStack.ai].
2023
- PromptDA: Label-guided Data Augmentation for Prompt-based Few-shot Learners.
Canyu Chen, Kai Shu.
Published in Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2023, Main Conference Long Paper)
Also presented in workshop ENLSP@NeurIPS 2022 (Oral Spotlight).
[arXiv] [code] [YouTube] [Bilibili] [slides] [poster]
- Fair Classification via Domain Adaptation: A Dual Adversarial Learning Approach.
Yueqing Liang, Canyu Chen, Tian Tian, Kai Shu.
Published in Frontiers in Big Data 2023.
[publication] [arXiv]
- Attacking Fake News Detectors via Manipulating News Social Engagement.
Haoran Wang, Yingtong Dou, Canyu Chen, Lichao Sun, Philip S. Yu, Kai Shu.
Published in Proceedings of the ACM Web Conference 2023 (WWW 2023).
[arXiv] [code]
Media Coverage: [Montreal AI Ethics Institute].
2022
- Combating Health Misinformation in Social Media: Characterization, Detection, Intervention, and Open Issues.
Canyu Chen*, Haoran Wang*, Matthew Shapiro, Yunyu Xiao, Fei Wang, Kai Shu. (*equal contributions)
arXiv preprint. Nov. 2022.
[arXiv]
- When Fairness Meets Privacy: Fair Classification with Semi-Private Sensitive Attributes.
Canyu Chen, Yueqing Liang, Xiongxiao Xu, Shangyu Xie, Ashish Kundu, Ali Payani, Yuan Hong, Kai Shu.
Presented in workshops TSRML@NeurIPS 2022 and AFCP@NeurIPS 2022.
[arXiv] [Video] [Slides] [Poster]
Media Coverage: [Illinois Tech News].
- Artificial Intelligence Algorithms for Treatment of Diabetes.
Mudassir M. Rashid, Mohammad Reza Askari, Canyu Chen, Yueqing Liang, Kai Shu, Ali Cinar.
Published in Algorithms 2022.
[Paper]
- BOND: Benchmarking Unsupervised Outlier Node Detection on Static Attributed Graphs.
Kay Liu, Yingtong Dou, Yue Zhao, Xueying Ding, Xiyang Hu, Ruitong Zhang, Kaize Ding, Canyu Chen, Hao Peng, Kai Shu, Lichao Sun, Jundong Li, George H. Chen, Zhihao Jia, Philip S. Yu.
Published in Proceedings of the 36th Conference on Neural Information Processing Systems (NeurIPS 2022), Datasets and Benchmarks Track.
[arXiv] [code]
Invited Talks
- [11/26/2024] Guest lecture titled "Combating Misinformation in the Age of LLMs" for the course "CS 585 Natural Language Processing" instructed by Prof. Jacek Dzikowski at Illinois Institute of Technology. [Slides]
- [11/25/2024] Guest lecture titled "Combating Misinformation in the Age of LLMs" for the course "CSDS 600 Introduction to Artificial Intelligence" invited by Prof. Xiaotian (Max) Han at Case Western Reserve University. [Slides]
- [11/07/2024] Guest lecture titled "Combating Misinformation in the Age of LLMs" for the course "CSCI 8000 Advanced Special Topics in Computer Science" invited by Prof. Zhen Xiang at the University of Georgia. [Slides]
- [11/04/2024] Invited to give a talk titled "Can Large Language Model Agents Simulate Human Trust Behavior?" at Prof. James Evans's Knowledge Lab at the University of Chicago. [Slides]
- [10/29/2024] "Combating Misinformation in the Age of LLMs" invited by Prof. Tianhang Zheng at Zhejiang University. [Poster] [Slides]
- [10/22/2024] Guest lecture titled "Combating Misinformation in the Age of LLMs" invited by Prof. Ninghao Liu for the course CSCI 8265: Trustworthy Machine Learning (Fall 2024) at the University of Georgia. [Slides]
- [08/06/2024] Research Spotlight talk titled "Combating Misinformation in the Age of LLMs" at The 2024 Summit on Responsible Decentralized Intelligence: Future of Decentralization and AI, hosted by The Berkeley Center for Responsible, Decentralized Intelligence (Berkeley RDI). [Slides] [YouTube]
- [06/26/2024] "Can Large Language Model Agents Simulate Human Trust Behaviors?" invited by Prof. Yuxuan Liang at Swarma Club. [Slides]
- [05/10/2024] "Combating Misinformation in the Age of Large Language Models (LLMs)" invited by Wenchao Dong at the KAIST/IBS Data Science Group. [Slides]
- [02/29/2024] Spotlight Research talk titled "Can LLM-Generated Misinformation Be Detected?" at AGI Leap Summit 2024
- [04/18/2023] Guest lecture titled "Fairness in AI: An Introduction" invited by Prof. Lu Cheng for CS 483 Big Data Mining (Spring 2023) at UIC. [Slides]
Awards and Fellowships
- Outstanding Reviewer at EMNLP 2024.
- Highlight Article in AI Magazine (Volume 45, Issue 3, Fall 2024).
- Research Spotlight at The 2024 Summit on Responsible Decentralized Intelligence: Future of Decentralization and AI, hosted by The Berkeley Center for Responsible, Decentralized Intelligence (Berkeley RDI)
- Great Review at ACL Rolling Review (April 2024)
- Travel Award for the Seventeenth Midwest Speech and Language Days (MSLD 2024)
- Sigma Xi Student Research Award 2024 from Illinois Tech and the local Sigma Xi chapter. (An award of $500 is given each year to up to two graduate students at Illinois Tech who have demonstrated significant promise in research and scholarship through their accomplishments. There was only one awardee across the whole university in 2024.)
- Technical AI Safety Fellowship 2024 Spring from Harvard AI Safety Student Team.
- Third Place Award in the Illinois Tech College of Computing Poster Session 2024 (Ph.D. Group).
- Spotlight Research in the symposium AGI Leap Summit 2024.
- Didactic Paper Award (1/35 of all accepted papers) in the workshop ICBINB@NeurIPS 2023.
- NeurIPS 2023 Volunteer Award.
Media Coverage
- Illinois Tech Today: "Recognizing the Outstanding Work of Our Illinois Tech Faculty"
- Marktechpost AI Research News: "This AI Report from the Illinois Institute of Technology Presents Opportunities and Challenges of Combating Misinformation with LLMs"
- The Register: "It's true, LLMs are better than people – at creating convincing misinformation"
- Illinois Tech News: "Breaking Biases"
- Montreal AI Ethics Institute: "Attacking Fake News Detectors via Manipulating News Social Engagement"
- IEEE Spectrum: "Announcing a Benchmark to Improve AI Safety: MLCommons has made benchmarks for AI performance—now it's time to measure safety"